First cut of the Transformation Playground Frontend and Backend #1190
base: main
Conversation
Signed-off-by: Chris Helma <[email protected]>
TransformationPlayground/tp_backend/transform_expert/prompting/__init__.py
TransformationPlayground/tp_backend/transform_expert/tests/utils/test_transforms.py
Signed-off-by: Chris Helma <[email protected]>
- Do not attempt to be friendly in your responses. Be as direct and succinct as possible.
- Think through the problem, extract all data from the task and the previous conversations before creating a plan.
- Never assume any parameter values while invoking a tool or function.
- You may ask clarifying questions to the user if you need more information.
Oof, we definitely don't want this line in here. Remove.
Thanks for getting this in front of the team @chelma
I did not get a chance to refine the associated Jiras in advance of this change arriving, so I expect there will be back and forth on a number of the comments I've raised.
.gitignore
While there is a readme for this project, it doesn't have an architecture document; let's build one out similar to what we have for RFS (with the scope adjusted).
Good callout. I have a bunch of existing docs that can be translated into this format.
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
What goes in this db?
Good question. In the future, I expect we'll store:
- The user-loaded inputs from the left panel (what if you have a snapshot or source cluster w/ 10k+ indices? Probably should be stored client-side), which we'll then display portions of in the client.
- The transformation logic for each "input shape" that the user has made manual changes to, along with the full validation history and the LLM conversation. This will enable us to deploy all the transformations in a bundle that replay/backfill can use; it will facilitate the team's debugging efforts; it gives us a trackable history of what the LLM did and how its actions were incorporated into the customer's migration (think security owners); and it will facilitate LLM model selection/evaluation as well as LLM fine-tuning (we could have a collection of real migrations to tune with). A rough sketch of how this might be modeled follows this list.
- It may also store the validation test logic for each input shape. Imagine if, as part of an assessment process, we have an LLM propose additional tests we should be running for validation and then dynamically create and incorporate them. This would be really useful for validating behavior (does the target behave the way we want), not just API spec conformance (does it return a 2XX).
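To make that more concrete, here is a minimal, purely hypothetical sketch of how a pinned transform could be persisted with Django's ORM; the model and field names are illustrative guesses and are not part of this PR.

# Hypothetical sketch only - model/field names are illustrative, not from this PR.
from django.db import models

class TransformRecord(models.Model):
    input_shape = models.JSONField()                       # the user-loaded input (index, template, or documents)
    transform_code = models.TextField()                     # the Python transform currently pinned to that shape
    validation_history = models.JSONField(default=list)     # accumulated validation reports for auditability
    llm_conversation = models.JSONField(default=list)        # the GenAI exchange that produced/modified the code
    created_at = models.DateTimeField(auto_now_add=True)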
'transform_api_debug_file': {
    'level': 'DEBUG',
    'class': 'logging.FileHandler',
    'filename': 'logs/transform_api.debug.log',
    'formatter': 'verbose',
},
'transform_api_info_file': {
    'level': 'INFO',
    'class': 'logging.FileHandler',
    'filename': 'logs/transform_api.info.log',
    'formatter': 'verbose',
},
Double checking, does the debug log include the info level logs too? Is this in alignment with the logging working group session we ran?
Yeah, it includes both sets of logs. I don't remember the working group session, is there a link to its action items/artifacts?
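For anyone double-checking that behavior, here is a minimal sketch (not this PR's actual settings.py; the logger name is hypothetical) of why the DEBUG handler picks up the INFO records too: a handler emits every record at or above its own level, so anything the logger forwards at INFO or higher lands in both files.

# Minimal illustration only - the 'transform_api' logger name is hypothetical.
LOGGING = {
    'version': 1,
    'handlers': {
        'transform_api_debug_file': {'level': 'DEBUG', 'class': 'logging.FileHandler',
                                     'filename': 'logs/transform_api.debug.log'},
        'transform_api_info_file': {'level': 'INFO', 'class': 'logging.FileHandler',
                                    'filename': 'logs/transform_api.info.log'},
    },
    'loggers': {
        'transform_api': {
            'handlers': ['transform_api_debug_file', 'transform_api_info_file'],
            'level': 'DEBUG',  # INFO records clear both the logger level and the DEBUG handler's threshold
        },
    },
}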
Thanks for adding tests; let's make sure to update the CI.yml to run them so we have code coverage information as well.
Will investigate!
llm = ChatBedrockConverse(
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",  # This is the older version of the model, could be updated
    temperature=0,  # Suitable for straightforward, practical code generation
    max_tokens=4096,
How many tokens will we need? Can you frame how we choose this target?
In the future, I expect this to be fully configurable by the user - what if they want to use their own LLM rather than Bedrock, for example? We'll need to figure out what the right level of configurability is for initial user feedback. On the token front - that is the maximum number of output tokens, not input. 4k tokens is a LOT of code; I would be highly surprised if we ever had a transform that needed anywhere close to that. For reference, the ~450 lines of text in the OS 2.7 "knowledge" file is only ~3500 tokens.
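As a rough sketch of that configurability idea (the TRANSFORM_LLM_* setting names are hypothetical and not defined by this PR; this assumes the langchain-aws package's ChatBedrockConverse), the model id and output-token budget could be read from Django settings with the current values as defaults:

# Hypothetical configuration hook - setting names are illustrative only.
from django.conf import settings
from langchain_aws import ChatBedrockConverse

llm = ChatBedrockConverse(
    model=getattr(settings, "TRANSFORM_LLM_MODEL_ID", "anthropic.claude-3-5-sonnet-20240620-v1:0"),
    temperature=0,  # deterministic output suits code generation
    max_tokens=getattr(settings, "TRANSFORM_LLM_MAX_OUTPUT_TOKENS", 4096),  # output tokens, not input
)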
</Container>

{/* Testing/Output Column */}
<Container header="Testing & Output Panel">
How do you cycle back and forth between source / transformation logic / output?
I haven't implemented this all yet (obviously), but here's the user journey I'm envisioning:
- The user loads in their input shapes. Currently, that's just copy/pasting JSON into a text field, but soon I'd like to be able to load the contents of a snapshot into the Playground via a configurable process. Imagine a dialogue where you select the S3 location of your snapshot, something extracts the templates, indices, and a configurable number of documents (that's a whole workstream right there), and presents them in the left panel so you can see everything in your snapshot, then click on specific items to see their JSON, etc. Even once we've added the ability to read from snapshots, we'll want to leave open the possibility of adding manual input shapes. Example - the user doesn't have access to a snapshot of their production cluster but they do have access to the settings JSON and a few documents, which they can manually enter as shapes into the Playground for testing.
- The user then uses the dropdowns to select what type of transformation they are performing. The "transform type" (Index vs. template vs. documents) dropdown will disappear, because it will be obvious from context (the user selected an index on the input panel, so...).
- Once that is selected, we will have a pre-canned transform (we'll hand-craft and save, no GenAI) that is supplied based on those selections that will serve as a default. The default will be populated into the UI and pinned against that shape in the backend.
- The user can then hit a "test" button to see how that default transform will work against their input shape (e.g. we kick off the validation process).
- If the user wants to include functional tests in validation (e.g. actually creating/deleting indices against a real test cluster), they can go through a dialogue to set up and test a connection to their target cluster. Currently, that process is just "paste a URL into a text field and assume there's no auth to worry about".
- If the user likes the results of the default, pre-canned transform and it passes validation - great! No further work needed. If not, they can either modify the transform directly in the UI and run the validation process again, or they can ask the GenAI assistant to modify the existing transform in some way (not currently implemented, but expect to be easy). Either way, the new transform is then pinned against the input shape in the backend so that we know there's something custom going on.
- The user then selects a different input shape and goes through this process until everything is transforming as they desire, then they can either hit a button to "deploy" their transformed data/metadata to the target cluster (think for metadata migrations which are inherently "low scale") or "bundle" the transformations for inclusion into the Migrations Assistant backfill/replay processes (which are anything but "low scale"). For the "deploy" button, this is basically just skipping the cleanup step of the validation process we've already performed (e.g. take the transformed output and PUT it against the target, but don't delete it afterwards). For bundling, this is taking the stuff we have in the server-side DB, packaging it into a tarball/zip/whatever, and sticking it somewhere (S3?) so that the backfill and replayer processes can pick it up and load the transform objects.
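To illustrate the "deploy" idea in the last bullet (skip the cleanup step of validation and leave the transformed output in place on the target), here is a very rough sketch; the target URL, index name, and settings payload are all made up for this example.

# Rough illustration only - URL, index name, and payload are hypothetical.
import requests

target = "http://target-cluster:9200"
transformed_settings = {"settings": {"index": {"number_of_shards": 1}}}

resp = requests.put(f"{target}/my-transformed-index", json=transformed_settings)
resp.raise_for_status()  # unlike the validation flow, we deliberately skip the follow-up DELETE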
This seems like 3 different experiences: sourcing input JSON, transformation editing, and viewing output JSON. It seems like these should be decoupled into different pages, but I'm not sure of the overall workflow.
I think there's a lot of benefit in visualizing the three things together; I outlined the user journey I'm imagining above [1]. Curious what you think after reading that.
[1] #1190 (comment)
</Container>

{/* Transformation Column */}
<Container header="Transformation Panel">
This seems like it should be refactored into its own control to reduce overall coupling
Would love more details! But yeah - this whole page is ripe for refactoring.
const sourceVersionOptions: SelectProps.Options = [{ label: "Elasticsearch 6.8", value: "Elasticsearch 6.8" }];
const targetVersionOptions: SelectProps.Options = [{ label: "OpenSearch 2.17", value: "OpenSearch 2.17" }];
const transformTypeOptions: SelectProps.Options = [{ label: "Index", value: "Index" }];
const transformLanguageOptions: SelectProps.Options = [{ label: "Python", value: "Python" }];
Seems like this data should come from the API, or there should be a mapping of API data against the strings for visualization
+1. I think this comes down to a shared spec between the frontend and backend; this also applies to the shape of the requests/responses that are going over the wire. I know we've briefly discussed OpenAPI or something as a model format.
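As one possible shape for that shared spec (purely illustrative; the endpoint and field names below are not defined anywhere in this PR), the backend could report the options it supports and the frontend could render whatever comes back instead of hard-coding the strings:

# Illustrative sketch only - view name and response fields are hypothetical.
from rest_framework.views import APIView
from rest_framework.response import Response

class TransformOptionsView(APIView):
    def get(self, request):
        # The frontend would populate its dropdowns from this payload.
        return Response({
            "source_versions": ["Elasticsearch 6.8"],
            "target_versions": ["OpenSearch 2.17"],
            "transform_types": ["Index"],
            "transform_languages": ["Python"],
        })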
Let's use the existing Django API server in console_api.
I think there's a lot of benefit in separating this as a distinct project:
- The Playground is conceptually only loosely coupled to the rest of the Migration Assistant. There's no reason the snapshot it's reading inputs from needs to come from a console snapshot create command. Similarly, there's no reason the target cluster used for validation (or we do low-scale deployment against) needs to have been created as a part of the Migration Assistant setup process. The best argument might be around the format of the transformation bundle being coupled to the Migration Assistant, but there's no reason we couldn't specify that format to be generic. That said - you don't need to care about the rest of Migration Assistant at all for the Playground to be a useful way of creating/testing these transformations.
- The Playground is a very different experience than the general operator workflow. It's inherently interactive and cyclical in a way that running basic configuration commands is not.
- The Playground is generically useful and extensible as a standalone product feature. At its core, it's a way to visualize data/metadata, use a GenAI-assisted process to modify/transform it, test those modifications/transformations, and deploy them through some mechanism. Swapping out what specifically it uses as an input, tests against, or produces as output is inherently modular. There are already multiple other use-cases we've discussed where this could apply in domains outside of Elasticsearch-to-OpenSearch migrations.
The key thing in my mind is that users of the Migration Assistant should have a cohesive overall experience; but I think tightly coupling the Playground to the Migration Assistant Operator API/GUI in a way that prevents re-use would be a mistake.
Signed-off-by: Chris Helma <[email protected]>
    logger.debug(f"Validation report entries:\n{validation_report.report_entries}")
except TestTargetInnaccessibleError as e:
    logger.error(f"Target cluster is not accessible: {str(e)}")
    return Response({'error': str(e)}, status=status.HTTP_400_BAD_REQUEST)
Check warning (Code scanning / CodeQL): Information exposure through an exception (Medium) - stack trace information.
except Exception as e:
    logger.error(f"Testing process failed: {str(e)}")
    logger.exception(e)
    return Response({'error': str(e)}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)
Check warning (Code scanning / CodeQL): Information exposure through an exception (Medium) - stack trace information.
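One way to address these findings, sketched below (not the code in this PR; the helper name is hypothetical): keep the full exception in the server-side logs but return a generic message to the caller, so stack traces and internal details aren't exposed in the API response.

# Remediation sketch only - helper name is hypothetical.
import logging

from rest_framework import status
from rest_framework.response import Response

logger = logging.getLogger(__name__)

def internal_error_response() -> Response:
    # Intended to be called from within an except block so the traceback is captured in the logs.
    logger.exception("Testing process failed")
    return Response(
        {"error": "The testing process failed; see the server logs for details."},
        status=status.HTTP_500_INTERNAL_SERVER_ERROR,
    )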
Signed-off-by: Chris Helma <[email protected]>
Description
Adds a transforms/index/ path to create some Python code that will transform an input ES 6.8 Index Settings (w/ multi-type mappings) or ES 7.10 Index Settings into equivalent OpenSearch 2.X settings. The API returns the transform code, the result of invoking the transform against the user's input, and the validation results.
Issues Resolved
Testing
The default view when you open the GUI, with an input ES 6.8 JSON filled in
The GenAI recommended transformation for that input JSON
The modal window for modifying the GenAI recommendations
The results of applying the user guidance in a new, GenAI-recommended transform
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.